PRIOR: Personalized Prior for Reactivating the Information Overlooked in Federated Learning.

Neural Information Processing Systems

Classical federated learning (FL) enables training machine learning models without sharing data, preserving privacy, but heterogeneous data characteristics degrade the performance of the localized models. Personalized FL (PFL) addresses this by synthesizing personalized models from a global model via training on local data. Such a global model, however, may overlook the specific information of the clients that have been sampled. In this paper, we propose a novel scheme to inject personalized prior knowledge into the global model on each client, which attempts to mitigate the incomplete information problem introduced in PFL. At the heart of our proposed approach is a framework, PFL with Bregman Divergence (pFedBreD), which decouples the personalized prior from the local objective function regularized by Bregman divergence, allowing greater adaptability in personalized scenarios.
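To make the idea concrete, the following is a minimal sketch of a local objective that regularizes training toward a personalized prior via a Bregman divergence. The function names, the choice of generating function g(x) = ½‖x‖² (which reduces the divergence to the familiar proximal L2 term), and the weight `lam` are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def bregman_divergence(g, grad_g, x, y):
    """Bregman divergence D_g(x, y) = g(x) - g(y) - <grad g(y), x - y>."""
    return g(x) - g(y) - np.dot(grad_g(y), x - y)

# Illustrative generating function: g(x) = 0.5 * ||x||^2, whose Bregman
# divergence is 0.5 * ||x - y||^2 (the standard proximal regularizer).
g = lambda x: 0.5 * np.dot(x, x)
grad_g = lambda x: x

def local_objective(theta, data_loss, prior_mean, lam=0.1):
    """Hypothetical client objective: empirical loss plus a Bregman
    divergence pulling the personalized model toward a prior mean."""
    return data_loss(theta) + lam * bregman_divergence(g, grad_g, theta, prior_mean)
```

Decoupling the prior mean from the global model is what distinguishes this form from a plain proximal method: the regularizer can pull each client toward its own prior rather than toward the shared global weights.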


Supplementary of PRIOR: Personalized Prior for Reactivating the Information Overlooked in Federated Learning (A Glossary, Some Basic Knowledge, and Details about Implementations)

Neural Information Processing Systems

A.1 Glossary

The main notations in this paper are shown in Table 5. Eq. (15) takes the expectation E[D(X, y)]. Eq. (16) is the definition of the Bregman divergence,

D_g(x, y) = g(x) - g(y) - <∇g(y), x - y>;

hence the selected generating function g should be convex. The non-maximum-entropy approach to choosing the prior is also worth considering, but we focus on the maximum-entropy prior in this section.

In this section, inspired by MAML, we briefly introduce a meta-step-based implementation method. The mean of the SX-family prior in Eq. (8) is used in the regularization term. In this work, we use approximate Bayesian methods: the local training process based on regularization terms differs from sampling-based Bayesian learning, in which each model must be obtained by sampling. There are three parts of Eq. (13) that we need to deal with, and the corresponding first-order methods are as shown below.

In recent years, PFL has found use not only in predictive tasks like mobile-device input methods but also in areas where privacy is paramount, such as healthcare and finance.
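The meta-step-based implementation mentioned above can be illustrated with a first-order update in the style of Reptile/first-order MAML: run a few inner gradient steps on the local loss, then move the starting weights toward the adapted weights. All names and hyperparameters here are illustrative assumptions, not the paper's actual algorithm:

```python
import numpy as np

def meta_step(theta, grad_loss, inner_lr=0.01, outer_lr=0.1, inner_steps=3):
    """First-order meta-step sketch: adapt locally from theta via a few
    gradient steps, then pull theta toward the adapted weights."""
    adapted = theta.copy()
    for _ in range(inner_steps):
        adapted -= inner_lr * grad_loss(adapted)  # inner (local) adaptation
    return theta + outer_lr * (adapted - theta)   # outer (meta) update
```

Because only first-order gradients are used, no second derivatives need to be stored, which keeps the per-client cost close to ordinary local training.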

